Results 1 - 7 of 7
1.
Span J Psychol; 23: e8, 2020 May 21.
Article in English | MEDLINE | ID: mdl-32434622

ABSTRACT

In the present study, we extended the investigation of how people access emotion through nonverbal information by testing the effects of simple (tempo) and complex (timbre) acoustic features of music on felt emotion. Three- to six-year-old children (n = 100; 48% female) and university students (n = 64; 37.5% female) took part in three experiments in which acoustic features of music were manipulated to determine whether there are links between perceived emotion and felt emotion in processing musical segments. After exposure to segments of music, participants completed a felt-emotion judgment task. Chi-square tests showed significant tempo effects, ps < .001 (Exp. 1), and strong combined effects of mode and tempo on felt emotion. In addition, the strength of these effects changed with age. The combined effects were significantly stronger under the tempo-and-mode consistent condition (Exp. 2) than under the inconsistent condition (Exp. 3), ps < .001. In other words, simple and complex acoustic features differed in how strongly they affected felt emotion, and sensitivity to these features, especially the complex ones, changed with age. These findings suggest that the felt emotion evoked by the acoustic features of a given piece of music may be shaped both by innate abilities and by the strength of the mappings between acoustic features and emotion.
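
As an illustrative sketch only (not taken from the study), a chi-square test of this kind could be run on judgment counts like the hypothetical ones below, using scipy's chi2_contingency:

```python
# Hypothetical illustration: a chi-square test of felt-emotion judgments
# (e.g., "happy" vs. "sad") across fast- and slow-tempo segments.
from scipy.stats import chi2_contingency

# Assumed counts of judgments; rows = tempo (fast, slow),
# columns = felt emotion (happy, sad).
observed = [[38, 12],
            [15, 35]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```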


Subjects
Attention, Auditory Perception, Emotions, Music, Age Factors, Aptitude, Child, Preschool Child, Female, Humans, Instinct, Judgment, Male, Pitch Perception, Psychoacoustics, Time Perception, Young Adult
2.
Span J Psychol; 23: e8.1-e8.16, 2020. illus, tab, graph
Article in English | IBECS | ID: ibc-196583


Subjects
Humans, Male, Female, Preschool Child, Child, Adolescent, Young Adult, Emotions, Emotion-Focused Therapy/methods, Music Therapy, Acoustic Stimulation/psychology, Music/psychology, Facial Expression, Reflex
3.
Front Psychol; 9: 211, 2018.
Article in English | MEDLINE | ID: mdl-29520249

ABSTRACT

Recent findings have shown that information about changes in an object's environmental location in a discourse context is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in "The writer picked up the pen from the floor and moved it to the desk," the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual-world experiments with a "look-and-listen" task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize an unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized the unusual coexistence of object and location. More importantly, the activation of world knowledge and working memory appeared to change over the course of comprehension. These results are important because they demonstrate that interference between world knowledge and information in working memory arises dynamically during sentence comprehension.
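
As a rough, hypothetical sketch of how looks to a critical region are often summarized in visual-world analyses (the column names, bin width, and sample data below are assumptions, not the authors' pipeline):

```python
# Hypothetical illustration: proportion of fixations to the critical
# location across 50 ms bins in a visual-world experiment.
import pandas as pd

# Assumed long-format gaze samples: one row per sample.
samples = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "time_ms":     [120, 180, 430, 150, 380, 610],
    "roi":         ["critical", "other", "critical",
                    "critical", "other", "other"],
})

samples["bin"] = (samples["time_ms"] // 50) * 50          # 50 ms time bins
samples["on_critical"] = (samples["roi"] == "critical")   # looks to target

# Mean proportion of looks to the critical location per bin,
# averaged over participants.
prop = (samples.groupby(["bin", "participant"])["on_critical"]
        .mean()
        .groupby(level="bin")
        .mean())
print(prop)
```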

4.
Scand J Psychol; 58(4): 294-303, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28718970

ABSTRACT

Using a non-alphabetic language (Chinese), the present study tested the novel view that semantic information at the sublexical level is activated during handwriting production. Over 80% of Chinese characters are phonograms, in which semantic radicals carry category information (e.g., the characters for 'chair,' 'peach,' and 'orange' share a radical related to plants) while phonetic radicals carry phonetic information (e.g., the characters for 'wolf,' 'brightness,' and 'male' are all pronounced /lang/). Under different semantic category conditions at the lexical level (semantically related in Experiment 1; semantically unrelated in Experiment 2), the orthographic and semantic relatedness of the semantic radicals in the picture name and its distractor were manipulated under different SOAs (stimulus onset asynchronies, i.e., the interval between the onset of the picture and the onset of the interference word). Two questions were addressed: (1) Can semantic information be activated at the sublexical level? (2) How are semantic and orthographic information dynamically accessed during word production? Results showed that both orthographic and semantic information were activated in the present picture-word interference paradigm, and that this activation varied dynamically across SOAs, supporting our view that accounts of semantic processing in the writing modality should be extended to the sublexical level. The current findings open the possibility of building new orthography-phonology-semantics models of writing.
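
A minimal, hypothetical sketch of how picture-word interference latencies might be compared across relatedness conditions at each SOA (the data frame, condition labels, and RT values are invented for illustration, not the authors' analysis):

```python
# Hypothetical illustration of a picture-word interference analysis:
# compare naming latencies for related vs. unrelated distractors at
# each SOA with a paired t-test over participant means.
import pandas as pd
from scipy.stats import ttest_rel

# Assumed per-trial data: participant, SOA (ms), relatedness, naming RT (ms).
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "soa":         [0, 0, 100, 100, 0, 0, 100, 100],
    "related":     ["yes", "no", "yes", "no", "yes", "no", "yes", "no"],
    "rt":          [812, 776, 798, 790, 845, 801, 830, 818],
})

for soa, sub in trials.groupby("soa"):
    means = sub.pivot_table(index="participant", columns="related", values="rt")
    t, p = ttest_rel(means["yes"], means["no"])   # paired over participants
    print(f"SOA {soa} ms: t = {t:.2f}, p = {p:.3f}")
```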


Subjects
Handwriting, Psychological Inhibition, Visual Pattern Recognition/physiology, Psycholinguistics, Semantics, Adolescent, Adult, Female, Humans, Male, Young Adult
5.
Front Psychol; 7: 1623, 2016.
Article in English | MEDLINE | ID: mdl-27822192

ABSTRACT

Since the 1990s, there has been much discussion about how concepts are learned and processed. Many researchers hold that experienced bodily states (i.e., embodied experiences) are an important factor in how concepts are learned and used, and metaphorical mappings between abstract concepts, such as TIME and POWER, and concrete concepts, such as SPATIAL ORIENTATION and STRUCTURED EXPERIENCE, point to connections between abstract and concrete concepts. In much of the recent literature, common elements (e.g., concrete concepts) can be found that are shared by different abstract-concrete metaphorical expressions. We therefore assumed that mappings might also be found between two abstract concepts that share common elements, even though they have no symbolic connection. In the present study, two lexical decision tasks were conducted, and the priming effect between TIME and ABSTRACT ACTIONs was used as an index to test this hypothesis. Results showed a robust priming effect when a target verb and its prime belonged to the same duration type (the TIME-consistent condition). These findings suggest that mapping between concepts is affected by common elements. We propose a dynamic model in which mappings between concepts are influenced by common elements, including symbolic or embodied information; which kind of element (linguistic or embodied) is used would depend on how difficult a concept is to learn or access.
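
As a hedged illustration of the kind of priming-effect computation such a lexical decision study implies (the trial table and condition labels below are hypothetical, not the authors' data or code):

```python
# Hypothetical illustration: a priming effect in a lexical decision task,
# computed as the mean RT difference between duration-inconsistent and
# duration-consistent prime-target pairs (correct trials only).
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["consistent", "inconsistent"] * 4,
    "correct":     [True, True, True, False, True, True, True, True],
    "rt":          [598, 634, 612, 700, 571, 615, 586, 629],
})

correct = trials[trials["correct"]]
means = (correct.groupby(["participant", "condition"])["rt"]
         .mean()
         .unstack("condition"))
priming = means["inconsistent"] - means["consistent"]   # positive = facilitation
print(priming.mean(), "ms average priming effect")
```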

6.
Front Psychol; 7: 916, 2016.
Article in English | MEDLINE | ID: mdl-27379000

ABSTRACT

This paper advances the discussion of which emotion information affects word access. Emotion information, formed through repeated experiences, is primary and necessary in learning and representing word meanings. Previous findings suggested that the valence (i.e., positive or negative) denoted by words can be automatically activated and plays a role in many significant cognitive processes. However, there has been little discussion of whether discrete emotion information (i.e., happiness, anger, sadness, and fear) is also involved in these processes. According to the hierarchy model, emotions are organized within an abstract-to-concrete hierarchy, in which emotion prototypes are grouped under affective valence. By manipulating the congruency of emotion relations (i.e., matches or mismatches between valences and prototypes of emotion), the present study showed both an evaluative congruency effect (Experiment 1) and a discrete emotional congruency effect (Experiment 2). These findings indicate that not only affective valences but also discrete emotions can be activated in the present primed lexical decision task. However, the present findings also suggest that discrete emotions might be activated at a later priming stage than valences. The present work provides evidence that information about discrete emotion can be involved in word processing, which might be a result of participants' embodied experiences.

7.
Neurosci Lett; 610: 187-92, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26589544

ABSTRACT

Chinese differs from most Indo-European languages in its phonological, lexical, and syntactic structures. One of its unique properties is the abundance of homophones at the monosyllabic/morphemic level, with the consequence that monosyllabic homophones are all ambiguous in speech perception. Two-morpheme Chinese words can be composed of two high homophone-density morphemes (HH words), two low homophone-density morphemes (LL words), or one high and one low homophone-density morpheme (LH or HL words). The assumption of a simple inhibitory homophone effect is called into question in disyllabic spoken word recognition, in which the recognition of one morpheme is affected by semantic information given by the other. Event-related brain potentials (ERPs) were used to trace on-line competition among morphemic homophones during access to Chinese disyllables. Results showed significant differences in ERP amplitude when comparing LL and LH words, but not when comparing LL and HL words, suggesting that the first morpheme cannot be accessed without feedback from the second morpheme. Most importantly, analyses of N400 amplitude across density conditions showed a converse homophone effect, in which LL words, rather than LH or HL words, elicited a larger N400. These findings provide strong evidence of a dynamic integration system at work during spoken Chinese disyllable recognition.
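
As an illustrative sketch only, mean amplitude in an assumed 300-500 ms N400 window could be extracted from epoched EEG and compared across density conditions as follows (array shapes, window, and data are hypothetical, not the paper's pipeline):

```python
# Hypothetical illustration: mean N400 amplitude (300-500 ms window) per
# condition from already-epoched EEG data, compared with a paired t-test.
import numpy as np
from scipy.stats import ttest_rel

sfreq = 500                                       # assumed sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1 / sfreq)           # epoch from -200 to 800 ms
window = (times >= 0.3) & (times <= 0.5)          # assumed N400 window

rng = np.random.default_rng(0)
# Assumed shape: (participants, trials, time points), already averaged
# over a centro-parietal electrode cluster.
ll_epochs = rng.normal(size=(20, 40, times.size))
lh_epochs = rng.normal(size=(20, 40, times.size))

# Per-participant mean amplitude in the window for each condition.
ll_amp = ll_epochs[:, :, window].mean(axis=(1, 2))
lh_amp = lh_epochs[:, :, window].mean(axis=(1, 2))

t, p = ttest_rel(ll_amp, lh_amp)
print(f"LL vs. LH mean N400 amplitude: t = {t:.2f}, p = {p:.3f}")
```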


Subjects
Auditory Evoked Potentials, Speech Perception, Brain/physiology, Electroencephalography, Female, Humans, Male, Recognition (Psychology), Semantics, Young Adult